We develop data-driven models to predict when a robot should feed during social dining scenarios. Being able to eat independently with friends and family is considered one of the most memorable and important activities for people with mobility limitations. Robots can potentially help with this activity, but robot-assisted feeding is a multifaceted problem with challenges in bite acquisition, bite timing, and bite transfer. Bite timing in particular becomes uniquely challenging in social dining scenarios, because of the risk of interrupting the social dynamics of a human-robot group interaction. Our key insight is that a bite timing strategy that accounts for the delicate balance of social cues can lead to seamless interactions during robot-assisted feeding in a social dining scenario. We approach this problem by collecting a multimodal Human-Human Commensality Dataset (HHCD) containing 30 groups of three people eating together. We use this dataset to analyze human-human commensality behaviors and to develop bite timing prediction models for social dining scenarios. We also transfer these models to human-robot commensality scenarios. Our user studies show that prediction improves when our algorithm models the multimodal social signaling cues between diners. The HHCD dataset, the videos of the user studies, and the code will be publicly released upon acceptance.
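To make the idea of a social-cue-driven bite timing predictor concrete, here is a minimal sketch of a binary "feed now / wait" classifier over per-frame multimodal features. The feature names (gaze, speech activity, time since last bite) and the synthetic labels are illustrative assumptions, not the features or model used by the HHCD authors.

```python
# Minimal sketch of a bite-timing classifier driven by multimodal social cues.
# Feature names and labels below are illustrative placeholders only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy frames: [gaze_at_diner, speech_activity, mouth_open, time_since_last_bite]
X = rng.random((200, 4))
# Synthetic labels: "feed now" when the group is quiet and the last bite was long ago
y = ((X[:, 1] < 0.3) & (X[:, 3] > 0.5)).astype(int)

clf = LogisticRegression().fit(X, y)

frame = np.array([[0.2, 0.1, 0.0, 0.8]])  # quiet moment, long gap since last bite
print("P(feed now) =", clf.predict_proba(frame)[0, 1])
```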
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
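Since the models are publicly released, a natural follow-up is how to query them. The sketch below assumes the Hugging Face `transformers` library and uses the smaller publicly available `bigscience/bloom-560m` checkpoint so the example fits in ordinary memory; the full 176B model is far larger and typically requires multi-GPU or hosted inference.

```python
# Hedged sketch: prompting a publicly released BLOOM checkpoint with
# the Hugging Face `transformers` library (smaller 560M variant assumed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloom-560m"  # smaller sibling of the 176B BLOOM
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Translate to French: The cat sits on the mat."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```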
Structured channel pruning has been shown to significantly accelerate inference time for convolutional neural networks (CNNs) on modern hardware, with a relatively minor loss of network accuracy. Recent works permanently zero these channels during training, which we observe to significantly hamper final accuracy, particularly as the fraction of the network being pruned increases. We propose Soft Masking for cost-constrained Channel Pruning (SMCP) to allow pruned channels to adaptively return to the network while simultaneously pruning towards a target cost constraint. By adding a soft mask re-parameterization of the weights and channel pruning from the perspective of removing input channels, we allow gradient updates to previously pruned channels and the opportunity for the channels to later return to the network. We then formulate input channel pruning as a global resource allocation problem. Our method outperforms prior works on both the ImageNet classification and PASCAL VOC detection datasets.
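The following is a minimal sketch of the general soft-masking idea the abstract describes: each input channel of a convolution is scaled by a learnable gate, so "pruned" channels still receive gradients and can re-enter the network. It illustrates the re-parameterization only; SMCP's cost-constrained global resource allocation is not reproduced here, and the straight-through gating is an assumption of this sketch.

```python
# Sketch: soft channel masking via a learnable per-input-channel gate.
import torch
import torch.nn as nn

class SoftMaskedConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False)
        self.gate = nn.Parameter(torch.ones(in_ch))  # one soft mask per input channel

    def forward(self, x):
        # Hard mask in the forward pass, soft gradient flow to the gate (straight-through).
        hard = (self.gate > 0.5).float()
        mask = hard + self.gate - self.gate.detach()
        return self.conv(x * mask.view(1, -1, 1, 1))

layer = SoftMaskedConv(8, 16)
out = layer(torch.randn(2, 8, 32, 32))
print(out.shape)  # torch.Size([2, 16, 32, 32])
```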
Music transcription, which deals with the conversion of music sources into a structured digital format, is a key problem in Music Information Retrieval (MIR). When this challenge is addressed in computational terms, the MIR community follows two lines of research depending on the input: music documents, which is the case of Optical Music Recognition (OMR), or audio recordings, which is the case of Automatic Music Transcription (AMT). The different nature of these inputs has conditioned the two fields to develop modality-specific frameworks. However, their recent formulation as sequence labeling tasks leads to a common output representation, which enables research on a combined paradigm. In this respect, multimodal image and audio music transcription comprises the challenge of effectively combining the information conveyed by the image and audio modalities. In this work, we explore this question at the late-fusion level: we study four combination approaches in order to merge, for the first time, the hypotheses of end-to-end OMR and AMT systems in a lattice-based search space. The results obtained for a series of performance scenarios, in which the corresponding single-modality models yield different error rates, show interesting benefits of these approaches. In addition, two of the four strategies considered significantly improve over the corresponding unimodal standard recognition frameworks.
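As a rough illustration of the late-fusion idea, one generic scheme is to rescore the n-best symbol sequences proposed by the image-based (OMR) and audio-based (AMT) systems with a weighted sum of their log-scores. This sketch is an assumption for illustration only; it is not the lattice-based combination methods studied in the paper.

```python
# Generic late-fusion sketch: weighted rescoring of n-best symbol-sequence hypotheses.
def fuse_nbest(omr_nbest, amt_nbest, w_omr=0.5, w_amt=0.5):
    """Each n-best list maps a candidate symbol sequence (tuple) to a log-score."""
    candidates = set(omr_nbest) | set(amt_nbest)
    floor = -1e9  # penalty for hypotheses missing from one modality
    scored = {
        seq: w_omr * omr_nbest.get(seq, floor) + w_amt * amt_nbest.get(seq, floor)
        for seq in candidates
    }
    return max(scored, key=scored.get)

omr = {("C4", "E4", "G4"): -1.2, ("C4", "E4", "A4"): -2.5}
amt = {("C4", "E4", "G4"): -1.8, ("C4", "F4", "G4"): -2.1}
print(fuse_nbest(omr, amt))  # ('C4', 'E4', 'G4')
```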
Air pollution monitoring platforms play a very important role in preventing and mitigating the effects of pollution. Recent advances in the field of graph signal processing have made it possible to describe and analyze air pollution monitoring networks using graphs. One of the main applications is the reconstruction of the measured signal on the graph from a subset of sensors. Reconstructing a signal using information from a sensor's neighbors can help improve the quality of the network's data; examples are filling in missing data with readings from correlated neighboring nodes, or correcting a drifting sensor with its more accurate neighbors. This paper compares several types of graph signal reconstruction methods applied to real data sets from Spanish air pollution reference stations. The methods considered are Laplacian interpolation, graph signal processing low-pass-based graph signal reconstruction, and kernel-based graph signal reconstruction, and they are compared on actual air pollution data sets measuring O3, NO2, and PM10. We show the ability of each method to reconstruct the signal of a pollutant, as well as the computational cost of the reconstruction. The results show the superiority of the kernel-based graph signal reconstruction methods, but also the difficulty the methods have in air pollution monitoring networks with a large number of low-cost sensors. However, we show that this scalability issue can be overcome with simple methods, such as partitioning the network with a clustering algorithm.
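For reference, a minimal sketch of Laplacian interpolation, one of the reconstruction methods named above: unknown node values are chosen to minimize the Laplacian quadratic form x^T L x given the observed nodes, which reduces to solving L_uu x_u = -L_uk x_k. The tiny 4-node graph below is an illustrative assumption, not the paper's sensor network.

```python
# Laplacian interpolation sketch: fill missing sensor readings from graph neighbors.
import numpy as np

# Adjacency of a small sensor graph (a 4-node cycle, symmetric weights)
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W              # combinatorial Laplacian

x = np.array([10.0, np.nan, 30.0, np.nan])  # NaN = missing sensor reading
known = ~np.isnan(x)
unknown = np.isnan(x)

L_uu = L[np.ix_(unknown, unknown)]
L_uk = L[np.ix_(unknown, known)]
x[unknown] = np.linalg.solve(L_uu, -L_uk @ x[known])
print(x)  # missing nodes get the average of their neighbors: [10. 20. 30. 20.]
```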
Calibrating low-cost sensors with machine learning techniques is now a widely used methodology. Although many challenges remain when deploying low-cost sensors for air quality monitoring, low-cost sensors have been shown to be useful when combined with high-precision instrumentation. Consequently, most research has focused on applying different calibration techniques using machine learning. However, the successful application of these models depends on the quality of the data obtained by the sensors, and little attention has been paid to the whole data gathering process, from sensor sampling and data pre-processing to the calibration of the sensor itself. In this paper, we present the main sensor sampling parameters, their corresponding impact on the quality of machine-learning-based sensor calibration, and their effect on energy consumption, thus exposing the existing trade-offs. Finally, results on an experimental node show the impact of the data sampling strategy on the calibration of tropospheric ozone, nitrogen dioxide, and nitrogen monoxide low-cost sensors. Specifically, we show how a sampling strategy that minimizes the duty cycle of the sensing subsystem can reduce power consumption while maintaining data quality.
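The duty-cycle trade-off can be made concrete with a back-of-the-envelope calculation: average power is the duty-cycle-weighted mix of active and sleep power. The current figures below are placeholder assumptions, not measurements from the experimental node in the paper.

```python
# Illustrative power/duty-cycle trade-off for a duty-cycled sensing subsystem.
def average_power_mw(duty_cycle, p_active_mw=60.0, p_sleep_mw=0.5):
    """Average power when the subsystem is active a fraction `duty_cycle`
    of the time and asleep otherwise (placeholder current figures)."""
    return duty_cycle * p_active_mw + (1.0 - duty_cycle) * p_sleep_mw

for d in (1.0, 0.5, 0.1, 0.01):
    print(f"duty cycle {d:>5.0%}: {average_power_mw(d):6.2f} mW")
```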
Time series segmentation is an essential step in most machine-learning-driven, sensor-based IoT applications. This paper introduces a sample-efficient, robust time-series segmentation model and algorithm. We show that by learning a representation tailored specifically to the segmentation objective, based on the maximum mean discrepancy (MMD), our algorithm can robustly detect time-series events across different applications. Our loss function allows us to infer whether consecutive sequences of samples are drawn from the same distribution (the null hypothesis), and determines the change-point between pairs for which the null hypothesis is rejected (i.e., that come from different distributions). We demonstrate its applicability in a real-world IoT deployment for ambient-sensing-based activity recognition. Moreover, while many works on change-point detection exist in the literature, our model is significantly simpler and matches or outperforms state-of-the-art methods. On average, we can fully train our model in 9 to 93 seconds, with little variation in hyperparameters for data across different applications.
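For intuition, the core quantity behind such a segmentation objective is the MMD between two adjacent windows: a large value suggests the windows come from different distributions, i.e. a candidate change-point. Below is a generic (biased) RBF-kernel MMD estimator on raw samples; it is a sketch of the standard statistic, not the authors' learned representation or loss.

```python
# Generic biased estimate of squared MMD with an RBF kernel between two windows.
import numpy as np

def mmd2_rbf(x, y, sigma=1.0):
    """x, y: arrays of shape (n, d) and (m, d)."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
before_a = rng.normal(0.0, 1.0, size=(100, 3))  # window before a putative event
before_b = rng.normal(0.0, 1.0, size=(100, 3))  # another window, same regime
after = rng.normal(2.0, 1.0, size=(100, 3))     # window after the event
print(mmd2_rbf(before_a, before_b), mmd2_rbf(before_a, after))  # small vs. large
```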
Neural network pruning-the task of reducing the size of a network by removing parameters-has been the subject of a great deal of work in recent years. We provide a meta-analysis of the literature, including an overview of approaches to pruning and consistent findings in the literature. After aggregating results across 81 papers and pruning hundreds of models in controlled conditions, our clearest finding is that the community suffers from a lack of standardized benchmarks and metrics. This deficiency is substantial enough that it is hard to compare pruning techniques to one another or determine how much progress the field has made over the past three decades. To address this situation, we identify issues with current practices, suggest concrete remedies, and introduce ShrinkBench, an open-source framework to facilitate standardized evaluations of pruning methods. We use ShrinkBench to compare various pruning techniques and show that its comprehensive evaluation can prevent common pitfalls when comparing pruning methods.
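To make the operation being benchmarked concrete, here is a minimal, generic pruning example using PyTorch's built-in `torch.nn.utils.prune` utilities. This is shown only as an illustration of unstructured versus structured pruning; it is not the ShrinkBench API, which provides its own standardized wrappers and metrics.

```python
# Generic PyTorch pruning example (not ShrinkBench).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(16, 32, kernel_size=3)

# Unstructured: zero the 30% smallest-magnitude weights.
prune.l1_unstructured(conv, name="weight", amount=0.3)

# Structured: remove 25% of output channels by L2 norm along dim 0.
prune.ln_structured(conv, name="weight", amount=0.25, n=2, dim=0)

sparsity = (conv.weight == 0).float().mean().item()
print(f"overall weight sparsity: {sparsity:.2%}")
```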
Real-world robotic grasping can be done robustly if a complete 3D Point Cloud Data (PCD) of an object is available. However, in practice, PCDs are often incomplete when objects are viewed from few and sparse viewpoints before the grasping action, leading to the generation of wrong or inaccurate grasp poses. We propose a novel grasping strategy, named 3DSGrasp, that predicts the missing geometry from the partial PCD to produce reliable grasp poses. Our proposed PCD completion network is a Transformer-based encoder-decoder network with an Offset-Attention layer. Our network is inherently invariant to the object pose and point's permutation, which generates PCDs that are geometrically consistent and completed properly. Experiments on a wide range of partial PCD show that 3DSGrasp outperforms the best state-of-the-art method on PCD completion tasks and largely improves the grasping success rate in real-world scenarios. The code and dataset will be made available upon acceptance.
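As a simplified sketch of the offset-attention idea mentioned above, in the spirit of point-cloud transformers: self-attention is computed over the points, and the offset between the input features and the attention output is transformed and added back as a residual. This is an illustrative approximation under those assumptions, not the exact layer used in 3DSGrasp.

```python
# Simplified offset-attention block over point features.
import torch
import torch.nn as nn

class OffsetAttention(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.lbr = nn.Sequential(nn.Linear(dim, dim), nn.LayerNorm(dim), nn.ReLU())

    def forward(self, x):                      # x: (batch, num_points, dim)
        attn_out, _ = self.attn(x, x, x)
        offset = x - attn_out                  # how far attention moved each feature
        return x + self.lbr(offset)            # residual update from the offset

feats = torch.randn(2, 1024, 128)              # partial point-cloud features
print(OffsetAttention(128)(feats).shape)       # torch.Size([2, 1024, 128])
```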